
    A novel approach to quantify random error explicitly in epidemiological studies

    The most frequently used methods for handling random error are largely misunderstood or misused by researchers. We propose a simple approach to quantify the amount of random error that does not require a solid background in statistics for its proper interpretation. This method may help researchers refrain from oversimplistic interpretations that rely on statistical significance.

    Spike sorting for large, dense electrode arrays

    Developments in microfabrication technology have enabled the production of neural electrode arrays with hundreds of closely spaced recording sites, and electrodes with thousands of sites are under development. These probes in principle allow the simultaneous recording of very large numbers of neurons. However, use of this technology requires techniques for decoding the spike times of the recorded neurons from the raw data captured from the probes. Here we present a set of tools to solve this problem, implemented in a suite of practical, user-friendly, open-source software. We validate these methods on data from the cortex, hippocampus and thalamus of rat, mouse, macaque and marmoset, demonstrating error rates as low as 5%.
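    As a point of reference for the core detection problem described above, the sketch below shows a minimal band-pass-and-threshold spike detector in Python. It is an illustration only, not the authors' pipeline: the filter band, the MAD-based noise estimate, the 4.5-standard-deviation threshold, and the synthetic trace are all assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_spikes(trace, fs, band=(300.0, 6000.0), thresh_sd=4.5):
        """Return sample indices where the filtered trace first crosses a negative threshold."""
        b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trace)
        # Robust noise estimate from the median absolute deviation.
        sigma = np.median(np.abs(filtered)) / 0.6745
        below = filtered < -thresh_sd * sigma
        # Keep only the first sample of each threshold-crossing event.
        return np.flatnonzero(below & ~np.roll(below, 1))

    # Synthetic example: one second of noise with three injected negative deflections.
    fs = 30_000
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 1.0, fs)
    trace[[5_000, 12_000, 22_000]] -= 20.0
    print(detect_spikes(trace, fs))

    Real pipelines go on to extract waveforms across neighbouring channels and cluster them into putative neurons, which is where the dense-array geometry matters most.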

    {\phi}^4 Solitary Waves in a Parabolic Potential: Existence, Stability, and Collisional Dynamics

    We explore a {\phi}^4 model with an added external parabolic potential term. This term dramatically alters the spectral properties of the system. We identify single and multiple kink solutions and examine their stability features; importantly, all of the stationary structures turn out to be unstable. We complement these with a dynamical study of the evolution of a single kink in the trap, as well as of the scattering of kink and anti-kink solutions of the model. We see that some of the key characteristics of kink-antikink collisions, such as the critical velocity and the multi-bounce windows, are sensitively dependent on the trap strength parameter, as well as on the initial displacement of the kink and antikink.
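    For orientation, one common way to write a {\phi}^4 field equation with an added parabolic (trap) potential is sketched below in LaTeX; the exact normalization and the way the trap enters in the paper may differ, so treat this as an assumed form rather than the authors' model.

    \[
      u_{tt} = u_{xx} + u - u^{3} - \tfrac{1}{2}\Omega^{2} x^{2}\, u ,
    \]

    where \Omega plays the role of the trap strength parameter. For \Omega = 0 the static kink u(x) = \tanh(x/\sqrt{2}) solves the equation, and it is the x-dependent trap term that reshapes the linearization spectrum around such states.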

    The Value of p-Value in Biomedical Research

    Significance tests and the corresponding p-values play a crucial role in decision making. This commentary presents the meaning, interpretation and misinterpretation of p-values. Alternatives for evaluating the reported evidence are also discussed.
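    As a purely illustrative sketch (not taken from the commentary), the snippet below contrasts a p-value with one commonly proposed alternative, a confidence interval for the effect size, on simulated data; the group sizes, means, and spreads are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    treatment = rng.normal(1.0, 2.0, 50)   # hypothetical treatment group
    control = rng.normal(0.0, 2.0, 50)     # hypothetical control group

    t_stat, p_value = stats.ttest_ind(treatment, control)
    diff = treatment.mean() - control.mean()
    se = np.sqrt(treatment.var(ddof=1) / 50 + control.var(ddof=1) / 50)
    ci = (diff - 1.96 * se, diff + 1.96 * se)

    print(f"p-value: {p_value:.3f}")
    print(f"mean difference: {diff:.2f}, approximate 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
    # The interval conveys the size and precision of the effect,
    # which the bare p-value does not.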

    Interpretation of evidence in data by untrained medical students: a scenario-based study

    Background: To determine which approach to assessment of evidence in data - statistical tests or likelihood ratios - comes closest to the interpretation of evidence by untrained medical students. Methods: Empirical study of medical students (N = 842), untrained in statistical inference or in the interpretation of diagnostic tests. They were asked to interpret a hypothetical diagnostic test, presented in four versions that differed in the distributions of test scores in diseased and non-diseased populations. Each student received only one version. The intuitive application of the statistical test approach would lead to rejecting the null hypothesis of no disease in version A, and to accepting the null in version B. Application of the likelihood ratio approach led to the opposite conclusions - against the disease in A, and in favour of disease in B. Version C tested the importance of the p-value (A: 0.04 versus C: 0.08) and version D the importance of the likelihood ratio (C: 1/4 versus D: 1/8). Results: In version A, 7.5% concluded that the result was in favour of disease (compatible with the p-value), 43.6% ruled against the disease (compatible with the likelihood ratio), and 48.9% were undecided. In version B, 69.0% were in favour of disease (compatible with the likelihood ratio), 4.5% against (compatible with the p-value), and 26.5% undecided. Increasing the p-value from 0.04 to 0.08 did not change the results. The change in the likelihood ratio from 1/4 to 1/8 increased the proportion of non-committed responses. Conclusions: Most untrained medical students appear to interpret evidence from data in a manner that is compatible with the use of likelihood ratios.
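    The tension between the two approaches can be made concrete with a small numerical sketch; the normal score distributions and the observed value below are assumptions chosen so that the numbers roughly echo version A (a one-sided p of about 0.04 against "no disease", yet a likelihood ratio of about 1/4 pointing away from disease), not the paper's actual scenarios.

    from scipy.stats import norm

    non_diseased = norm(loc=0.0, scale=1.0)   # assumed score distribution without disease
    diseased = norm(loc=4.17, scale=1.0)      # assumed score distribution with disease
    x = 1.75                                  # observed test score

    p_value = non_diseased.sf(x)                    # P(score >= x | no disease), about 0.04
    lr = diseased.pdf(x) / non_diseased.pdf(x)      # likelihood ratio for disease, about 1/4

    print(f"one-sided p-value under 'no disease': {p_value:.3f}")
    print(f"likelihood ratio (disease vs no disease): {lr:.2f}")
    # The score is improbable under "no disease", but even less probable under
    # "disease", so the likelihood ratio argues against disease despite p < 0.05.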

    The null hypothesis significance test in health sciences research (1995-2006): statistical analysis and interpretation

    Background: The null hypothesis significance test (NHST) is the most frequently used statistical method, although its inferential validity has been widely criticized since its introduction. In 1988, the International Committee of Medical Journal Editors (ICMJE) warned against sole reliance on NHST to substantiate study conclusions and suggested supplementary use of confidence intervals (CI). Our objective was to evaluate the extent and quality of the use of NHST and CI in English- and Spanish-language biomedical publications between 1995 and 2006, taking into account the ICMJE recommendations, with particular focus on the accuracy of the interpretation of statistical significance and the validity of conclusions. Methods: Original articles published in three English and three Spanish biomedical journals in three fields (General Medicine, Clinical Specialties, and Epidemiology - Public Health) were considered for this study. Papers published in 1995-1996, 2000-2001, and 2005-2006 were selected through a systematic sampling method. After excluding purely descriptive and theoretical articles, analytic studies were evaluated for their use of NHST with P-values and/or CI for the interpretation of statistical "significance" and "relevance" in study conclusions. Results: Among 1,043 original papers, 874 were selected for detailed review. The exclusive use of P-values was less frequent in English-language publications as well as in Public Health journals; overall, such use decreased from 41% in 1995-1996 to 21% in 2005-2006. While the use of CI increased over time, the "significance fallacy" (equating statistical and substantive significance) appeared very often, mainly in journals devoted to clinical specialties (81%). Of the papers originally written in English and Spanish, 15% and 10%, respectively, mentioned statistical significance in their conclusions. Conclusions: Our review shows some improvement in the statistical handling of results, but further efforts by scholars and journal editors are clearly required to move reporting practice toward the ICMJE recommendations, especially in the clinical setting and among publications in Spanish.

    Decision-Making in Research Tasks with Sequential Testing

    Background: In a recent controversial essay published by J. P. A. Ioannidis in PLoS Medicine, it has been argued that in some research fields most of the published findings are false. Based on theoretical reasoning, it can be shown that small effect sizes, error-prone tests, low priors of the tested hypotheses, and biases in the evaluation and publication of research findings increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research, in which single tests are used to evaluate independent hypotheses. Methodology/Principal Findings: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios. In these scenarios, research tasks are solved sequentially, i.e. subsequent tests can be chosen depending on previous results. We investigate simple sequential testing and scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from computer simulations indicate that, for the tasks analyzed in this study, the fraction of false results among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations. Conclusions/Significance: For the research tasks studied here, findings tend to become more reliable over time. We also find that performance was surprisingly inefficient in those experimental settings where not all performed tests could be published. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
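    The qualitative claim that repeated, informative testing cleans out false positives can be illustrated with a much-simplified Monte Carlo sketch; this is an assumption-laden stand-in, not the paper's simulation, since it models only repeated retesting of current positives rather than the choice among differently informative tests.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000          # candidate hypotheses
    prior_true = 0.1     # fraction of hypotheses that are actually true
    power, alpha = 0.8, 0.05

    truth = rng.random(n) < prior_true
    positive = np.ones(n, dtype=bool)    # every hypothesis starts as an open question

    for round_number in range(1, 5):
        p_detect = np.where(truth, power, alpha)   # per-hypothesis chance of a positive result
        positive &= rng.random(n) < p_detect       # keep only hypotheses that test positive again
        false_fraction = float(np.mean(~truth[positive])) if positive.any() else 0.0
        print(f"round {round_number}: positives={positive.sum()}, "
              f"false fraction={false_fraction:.3f}")

    With these assumed numbers the false fraction among positives drops from roughly a third after one round to well under one percent after three, mirroring the direction of the simulation results summarized above.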